OpenStack Mitaka : Use Cinder Storage (GlusterFS)
2016/05/10
It's possible to use virtual storage provided by Cinder if an Instance needs more disks.
This example configures virtual storage with a GlusterFS backend.
+------------------+ +------------------+
10.0.0.50| [ Storage Node ] | 10.0.0.61| |
+------------------+ +-----+ Cinder-Volume | +-----+ GlusterFS #1 |
| [ Control Node ] | | eth0| | | eth0| |
| Keystone |10.0.0.30 | +------------------+ | +------------------+
| Glance |------------+------------------------------+
| Nova API |eth0 | +------------------+ | +------------------+
| Cinder API | | eth0| [ Compute Node ] | | eth0| |
+------------------+ +-----+ Nova Compute | +-----+ GlusterFS #2 |
10.0.0.51| | 10.0.0.62| |
+------------------+ +------------------+
[1] A GlusterFS server is required to be running on your LAN; refer to here.
This example uses a replication volume "vol_replica" provided by "glfs01" and "glfs02".
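Before configuring Cinder, it may help to confirm the replica volume exists and is started. This is a sketch run on one of the GlusterFS nodes, using the volume name from this example's setup:

```shell
# on glfs01 : confirm the replica volume exists and is started
gluster volume info vol_replica
# "Status: Started" should appear, with bricks on both glfs01 and glfs02
gluster volume status vol_replica
```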
[2] Configure the Storage Node.
# install from EPEL
[root@storage ~]# yum --enablerepo=epel -y install glusterfs glusterfs-fuse
[root@storage ~]# vi /etc/cinder/cinder.conf
# add the following in the [DEFAULT] section
enabled_backends = glusterfs
# add the following to the end
[glusterfs]
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/glusterfs_shares
glusterfs_mount_point_base = $state_path/mnt_gluster
[root@storage ~]# vi /etc/cinder/glusterfs_shares
# create new : specify GlusterFS volumes
glfs01.srv.world:/vol_replica
[root@storage ~]# chmod 640 /etc/cinder/glusterfs_shares
[root@storage ~]# chgrp cinder /etc/cinder/glusterfs_shares
[root@storage ~]# systemctl restart openstack-cinder-volume
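After the restart, cinder-volume should mount the GlusterFS share under the configured mount point base. A quick sanity check (assuming the default state_path of /var/lib/cinder):

```shell
# verify the GlusterFS share was mounted by cinder-volume
df -hT | grep glusterfs
# the share appears under the mount point base configured above
ls /var/lib/cinder/mnt_gluster
```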
[3] Configure the Compute Node to mount GlusterFS volumes.
# install from EPEL
[root@node01 ~]# yum --enablerepo=epel -y install glusterfs glusterfs-fuse
[root@node01 ~]# vi /etc/nova/nova.conf
# add the following in the [DEFAULT] section
volume_api_class = nova.volume.cinder.API
[root@node01 ~]# systemctl restart openstack-nova-compute
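Optionally, you can verify the Compute Node can reach the GlusterFS servers by mounting the volume manually before any attach. This is a connectivity check only; unmount afterwards, since Nova performs its own mount on attach (assumes /mnt is free):

```shell
# temporary manual mount to test GlusterFS connectivity
mount -t glusterfs glfs01.srv.world:/vol_replica /mnt
df -hT /mnt
umount /mnt
```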
[4] For example, create a 10GB virtual disk "disk01". You can work from any node (this example uses the Control Node).
# set environment variable first
[root@dlp ~(keystone)]# echo "export OS_VOLUME_API_VERSION=2" >> ~/keystonerc
[root@dlp ~(keystone)]# source ~/keystonerc
[root@dlp ~(keystone)]# cinder create --display_name disk01 10
+--------------------------------+---------------------------------------+
| Property | Value |
+--------------------------------+---------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2016-05-10T12:26:27.000000 |
| description | None |
| encrypted | False |
| id | 0f0ec52b-27eb-46a4-a922-0faba41fecd2 |
| metadata | {} |
| migration_status | None |
| multiattach | False |
| name | disk01 |
| os-vol-host-attr:host | network.srv.world@glusterfs#GlusterFS |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 7a160aeddebd4e398fd22e6491f10baa |
| replication_status | disabled |
| size | 10 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| updated_at | 2016-05-10T12:26:27.000000 |
| user_id | f9c0838d5fcf4a87ba2e0b1653faa6d0 |
| volume_type | None |
+--------------------------------+---------------------------------------+
[root@dlp ~(keystone)]# cinder list
+--------------------------------------+-----------+--------+------+-------------+----------+-------------+
|                  ID                  |   Status  |  Name  | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------+------+-------------+----------+-------------+
| 0f0ec52b-27eb-46a4-a922-0faba41fecd2 | available | disk01 |  10  |      -      |  false   |             |
+--------------------------------------+-----------+--------+------+-------------+----------+-------------+
[5] Attach the virtual disk to an Instance. In the example below, the disk is attached as "/dev/vdb"; after creating a file system on it, it can be used as normal storage.
[root@dlp ~(keystone)]# nova list
+-----------+----------+---------+------------+-------------+-----------------------------------+
| ID        | Name     | Status  | Task State | Power State | Networks                          |
+-----------+----------+---------+------------+-------------+-----------------------------------+
| 4e232450- | CentOS_7 | SHUTOFF | -          | Shutdown    | int_net=192.168.100.3, 10.0.0.201 |
+-----------+----------+---------+------------+-------------+-----------------------------------+
[root@dlp ~(keystone)]# nova volume-attach CentOS_7 0f0ec52b-27eb-46a4-a922-0faba41fecd2 auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | 60abe5d1-1f4c-4876-b287-e1bbd0703639 |
| serverId | 4e232450-4cae-47ba-a19a-1c59e3cbc91b |
| volumeId | 60abe5d1-1f4c-4876-b287-e1bbd0703639 |
+----------+--------------------------------------+
# the status of the attached disk turns "in-use" like follows
[root@dlp ~(keystone)]# cinder list
+--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+
|                  ID                  | Status |  Name  | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+
| 0f0ec52b-27eb-46a4-a922-0faba41fecd2 | in-use | disk01 |  10  |      -      |  false   | 4e232450-4cae-47ba-a19a-1c59e3cbc91b |
+--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+
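Inside the Instance, the attached volume appears as a raw block device. A minimal sketch of putting a file system on it and mounting it; the device name matches the attach output above, while the file system type and mount point are assumptions for illustration:

```shell
# inside the CentOS_7 instance : format and mount the new disk
mkfs.xfs /dev/vdb
mkdir -p /mnt/disk01
mount /dev/vdb /mnt/disk01
df -hT /mnt/disk01
```

For a persistent mount, the device could also be added to /etc/fstab inside the Instance.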